Approaching Adversarial Example Classification with Chaos Theory
Authors
Abstract
Similar Resources
Chaos/Complexity Theory and Education
Sciences exist to demonstrate the fundamental order underlying nature. Chaos/complexity theory is a novel and fascinating field of scientific inquiry. Notions from our everyday experience are connected to the laws of nature through chaos/complexity theory's concern with the relationships between simplicity and complexity, and between orderliness and randomness (Retrieved from http://www.inc...
Adversarial queuing theory with setups
We look at routing and scheduling problems on Kelly-type networks where the injection process is under the control of an adversary. The novelty of the model we consider is that the adversary injects requests of distinct types. Resources are subject to switch-over delays or setups when they begin servicing a new request class. In this new setting, we study the behavior of sensible policies as in...
Adversarial classification: An adversarial risk analysis approach
Classification problems in security settings are usually contemplated as confrontations in which one or more adversaries try to fool a classifier to obtain a benefit. Most approaches to such adversarial classification problems have focused on game theoretical ideas with strong underlying common knowledge assumptions, which are actually not realistic in security domains. We provide an alternativ...
DANCin SEQ2SEQ: Fooling Text Classifiers with Adversarial Text Example Generation
Machine learning models are powerful but fallible. Generating adversarial examples, inputs deliberately crafted to cause model misclassification or other errors, can yield important insight into model assumptions and vulnerabilities. Despite significant recent work on adversarial example generation targeting image classifiers, relatively little work exists exploring adversarial example generation...
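For intuition about the kind of attack this line of work studies, the snippet below is a minimal sketch of the fast gradient sign method (FGSM), a standard gradient-based technique for crafting adversarial images; it is not the DANCin SEQ2SEQ text-generation method described above, and the PyTorch model net, the input tensor shapes, and the perturbation budget eps are illustrative assumptions.

    # Minimal FGSM sketch (assumes a trained PyTorch classifier `net`,
    # a batch of images scaled to [0, 1], and integer class labels; illustrative only).
    import torch
    import torch.nn.functional as F

    def fgsm_example(net, images, labels, eps=0.03):
        """Return adversarially perturbed copies of `images` via the fast gradient sign method."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(net(images), labels)
        loss.backward()
        # Step each pixel in the direction that increases the loss, then clamp
        # back to the valid [0, 1] range so the result is still a legal image.
        adversarial = images + eps * images.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

Feeding fgsm_example(net, images, labels) back through net and comparing the new predictions with the originals gives a quick estimate of how fragile the classifier is under this simple perturbation.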
Convex Adversarial Collective Classification
In this paper, we present a novel method for robustly performing collective classification in the presence of a malicious adversary that can modify up to a fixed number of binary-valued attributes. Our method is formulated as a convex quadratic program that guarantees optimal weights against a worst-case adversary in polynomial time. In addition to increased robustness against active adversaries...
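To make the idea of a convex quadratic program with worst-case guarantees concrete, the sketch below trains a plain (non-collective) robust linear classifier in which an adversary may flip up to K binary attributes per example; the adversary's best response is upper-bounded via LP duality so the whole training problem stays a convex QP. This is a simplified stand-in rather than the paper's collective classification formulation, and the cvxpy model, the synthetic data, and names such as K, lam, and mu are assumptions for illustration.

    # Robust linear classification as a convex QP (simplified sketch, not the
    # paper's collective formulation). Requires numpy and cvxpy.
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, K, C = 40, 10, 2, 1.0               # examples, binary features, flip budget, slack weight

    X = rng.integers(0, 2, size=(n, d)).astype(float)        # binary-valued attributes
    y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))
    y[y == 0] = 1.0                                           # labels in {-1, +1}

    w = cp.Variable(d)
    b = cp.Variable()
    xi = cp.Variable(n, nonneg=True)           # hinge slack under the worst-case attack
    lam = cp.Variable(n, nonneg=True)          # LP-dual variables for the flip budget
    mu = cp.Variable((n, d), nonneg=True)

    constraints = []
    for i in range(n):
        # Flipping attribute j changes y_i * w^T x_i by -y_i * w_j * (2 x_ij - 1);
        # LP duality bounds the adversary's best <= K flips by K*lam_i + sum_j mu_ij.
        gain = cp.multiply(y[i] * (2.0 * X[i] - 1.0), w)
        constraints.append(lam[i] + mu[i] >= gain)
        worst_margin = y[i] * (X[i] @ w + b) - (K * lam[i] + cp.sum(mu[i]))
        constraints.append(xi[i] >= 1 - worst_margin)

    problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), constraints)
    problem.solve()
    print("robust training objective:", round(problem.value, 3))

Because the adversary's inner maximization over which attributes to flip is a small linear program, replacing it with its dual keeps the outer training problem convex, which is what makes a polynomial-time optimality guarantee possible.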
Journal
Journal title: Entropy
Year: 2020
ISSN: 1099-4300
DOI: 10.3390/e22111201